Session B-4

Federated Learning 4

Conference
8:30 AM — 10:00 AM EDT
Local
May 18 Thu, 5:30 AM — 7:00 AM PDT
Location
Babbio 104

OBLIVION: Poisoning Federated Learning by Inducing Catastrophic Forgetting

Chen Zhang (The Hang Seng University of Hong Kong, Hong Kong); Boyang Zhou, Zhiqiang He, Zeyuan Liu, Yanjiao Chen and Wenyuan Xu (Zhejiang University, China); Baochun Li (University of Toronto, Canada)

Federated learning is exposed to model poisoning attacks, as compromised clients may submit malicious model updates to pollute the global model. To defend against such attacks, robust aggregation rules are designed for the centralized server to winnow out outlier updates, which significantly reduces the effectiveness of existing poisoning attacks. In this paper, we develop an advanced model poisoning attack against defensive aggregation rules. In particular, we exploit the catastrophic forgetting phenomenon in continual learning to destroy the memory of the global model. Our proposed framework, called OBLIVION, features two key components. The first component prioritizes the weights that have the most influence on model accuracy for poisoning, which induces a more significant degradation of the global model than perturbing all weights equally. The second component smooths malicious model updates based on the number of selected compromised clients in the current round, adjusting the degree of poisoning to suit the dynamics of each training round. We implement a fully functional prototype of OBLIVION in PLATO, a real-world scalable federated learning framework. Our extensive experiments over three datasets demonstrate that OBLIVION can boost the performance of model poisoning attacks against unknown defensive aggregation rules.
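The two components can be pictured with a minimal sketch, assuming model updates are flat NumPy arrays and that per-weight influence scores are available; the function name and parameters below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def craft_poisoned_update(benign_update, influence_scores, top_fraction=0.1,
                          num_compromised_selected=1, base_scale=1.0):
    """Illustrative only: perturb the most influential weights and temper the
    perturbation by how many compromised clients were sampled this round."""
    update = benign_update.copy()
    k = max(1, int(top_fraction * update.size))
    top_idx = np.argsort(-np.abs(influence_scores))[:k]  # weights with most influence on accuracy
    scale = base_scale / num_compromised_selected         # smoother poisoning when more attackers are sampled
    update[top_idx] = -scale * update[top_idx]             # reverse direction only on those weights
    return update
```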
Speaker Yanjiao Chen (Zhejiang University)

Yanjiao Chen received her B.E. degree in Electronic Engineering from Tsinghua University in 2010 and her Ph.D. degree in Computer Science and Engineering from the Hong Kong University of Science and Technology in 2015. She is currently a Bairen researcher at Zhejiang University, China. Her research interests include AI security, network economics, and IoT security.


SplitGP: Achieving Both Generalization and Personalization in Federated Learning

Dong-Jun Han and Do-Yeon Kim (KAIST, Korea (South)); Minseok Choi (Kyung Hee University, Korea (South)); Christopher G. Brinton (Purdue University & Zoomi Inc., USA); Jaekyun Moon (KAIST, Korea (South))

A fundamental challenge in providing edge-AI services is the need for a machine learning (ML) model that achieves personalization (i.e., to individual clients) and generalization (i.e., to unseen data) concurrently. Existing techniques in federated learning (FL) face a steep tradeoff between these objectives and impose large computational requirements on edge devices during training and inference. In this paper, we propose SplitGP, a new split learning solution that simultaneously captures generalization and personalization capabilities for efficient inference across resource-constrained clients (e.g., mobile/IoT devices). Our key idea is to split the full ML model into client-side and server-side components and assign different roles to them: the client-side model is trained to have strong personalization capability optimized for each client's main task, while the server-side model is trained to have strong generalization capability for handling all clients' out-of-distribution tasks. We analytically characterize the convergence behavior of SplitGP, revealing that all client models approach stationary points asymptotically. Further, we analyze the inference time in SplitGP and provide bounds for determining model split ratios. Experimental results show that SplitGP outperforms existing baselines by wide margins in inference time and test accuracy for varying amounts of out-of-distribution samples.
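As a rough illustration of the client/server split described above, here is a minimal PyTorch-style sketch; the layer sizes and class names are assumptions, not the paper's architecture.

```python
import torch.nn as nn

class ClientModel(nn.Module):
    """Client-side split: shared feature extractor plus a small personalized head
    for the client's main task."""
    def __init__(self, in_dim=784, hidden=128, num_classes=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.personal_head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        z = self.body(x)                     # features that can be forwarded to the server
        return z, self.personal_head(z)      # personalized prediction kept on-device

class ServerModel(nn.Module):
    """Server-side split: a larger generalized head for out-of-distribution inputs."""
    def __init__(self, hidden=128, num_classes=10):
        super().__init__()
        self.general_head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                          nn.Linear(hidden, num_classes))

    def forward(self, z):
        return self.general_head(z)
```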
Speaker Dong-Jun Han (Purdue University)

Dong-Jun Han is currently a postdoctoral researcher at Purdue University working with Prof. Christopher G. Brinton and Prof. Mung Chiang. His research interests lie at the intersection of machine learning and communications/networking, and he has published papers in top-tier ML conferences (NeurIPS, ICML, ICLR), communications/networking conferences (INFOCOM), and journals (JSAC, TWC). He received his B.S., M.S., and Ph.D. degrees from KAIST, South Korea.


Network Adaptive Federated Learning: Congestion and Lossy Compression

Parikshit Hegde and Gustavo de Veciana (The University of Texas at Austin, USA); Aryan Mokhtari (University of Texas at Austin, USA)

In order to achieve the dual goals of privacy and learning across distributed data, Federated Learning (FL) systems rely on frequent exchanges of large files (model updates) between a set of clients and the server. As such, FL systems are exposed to, or are indeed the cause of, congestion across a wide set of network resources. Lossy compression can be used to reduce the size of exchanged files and the associated delays, at the cost of adding noise to model updates. By judiciously adapting clients' compression to varying network congestion, an FL application can reduce its wall-clock training time. To that end, we propose a Network Adaptive Compression (NAC-FL) policy, which dynamically adapts clients' lossy compression choices to variations in network congestion. We prove, under appropriate assumptions, that NAC-FL is asymptotically optimal in terms of directly minimizing the expected wall-clock training time. Further, we show via simulation that NAC-FL achieves robust performance improvements, with higher gains in settings with positively correlated delays.
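A toy version of such a congestion-adaptive policy might look as follows; this is a sketch of the general idea, not the NAC-FL rule itself, and the delay target and compression levels are assumptions.

```python
def choose_compression(update_bits_full, measured_bandwidth_bps,
                       levels=(1.0, 0.5, 0.25, 0.125), target_delay_s=2.0):
    """Toy policy: pick the mildest compression level whose estimated upload time
    under the currently measured bandwidth stays below a target delay."""
    for keep_fraction in levels:                                   # mild to aggressive
        est_delay = update_bits_full * keep_fraction / measured_bandwidth_bps
        if est_delay <= target_delay_s:
            return keep_fraction
    return levels[-1]                                              # congested link: compress hardest

# Example: a 320 Mbit update over a congested 20 Mb/s uplink -> keep 12.5% of the bits
print(choose_compression(3.2e8, 20e6))
```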
Speaker Parikshit Hegde (The University of Texas at Austin)

Parikshit Hegde is a fourth-year Ph.D. student in the Department of Electrical and Computer Engineering at The University of Texas at Austin. Previously, he received his Bachelor's and Master's degrees in Electrical Engineering from the Indian Institute of Technology Madras. He is advised by Gustavo de Veciana in the Wireless, Networking and Communications Group (WNCG). His current research interests are in federated learning and networks.



TVFL: Tunable Vertical Federated Learning towards Communication-Efficient Model Serving

Junhao Wang, Lan Zhang, Yihang Cheng and Shaoang Li (University of Science and Technology of China, China); Hong Zhang, Dongbo Huang and Xu Lan (Tencent, China)

Vertical federated learning (VFL) enables multiple participants with different data features and the same sample ID space to collaboratively train a model in a privacy-preserving way. However, high computational and communication overheads hinder the adoption of VFL in many resource-limited or delay-sensitive applications. In this work, we focus on reducing the communication cost and delay incurred by the transmission of intermediate results in VFL model serving. We investigate the inference results and find that a large portion of test samples can be predicted correctly by the active party alone, so the corresponding communication for federated inference is dispensable. Based on this insight, we theoretically analyze such "dispensable communication" and propose a novel tunable vertical federated learning framework, named TVFL, to avoid dispensable communication in model serving as much as possible. TVFL can smartly switch between independent inference and federated inference based on the features of the input sample. We further reveal that this tunability is closely related to the importance of participants' features. Our evaluations on seven datasets and three typical VFL models show that TVFL can save 57.6% of the communication cost and reduce prediction latency by 57.1% with little performance degradation.
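The switching idea can be illustrated with a small sketch; note that TVFL's actual switch is driven by the input sample's features and feature importance, whereas the stand-in below simply uses the active party's prediction confidence, and all names are hypothetical.

```python
import numpy as np

def tunable_inference(active_logits, federated_infer, confidence_threshold=0.9):
    """Illustrative switch: answer locally when the active party is confident,
    otherwise fall back to federated inference with the passive parties."""
    probs = np.exp(active_logits - active_logits.max())
    probs /= probs.sum()
    if probs.max() >= confidence_threshold:
        return int(probs.argmax())        # independent inference, no communication
    return federated_infer()              # contact passive parties only when needed
```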
Speaker Junhao Wang (University of Science and Technology of China)

Ph.D. candidate at the University of Science and Technology of China.


Session Chair

Carla Fabiana Chiasserini

Session B-6

Federated Learning 5

Conference
3:30 PM — 5:00 PM EDT
Local
May 18 Thu, 12:30 PM — 2:00 PM PDT
Location
Babbio 104

More than Enough is Too Much: Adaptive Defenses against Gradient Leakage in Production Federated Learning

Fei Wang, Ethan Hugh and Baochun Li (University of Toronto, Canada)

With increasing concerns about privacy leakage from gradients, a variety of attack mechanisms have emerged to recover private data from gradients at an honest-but-curious server, challenging the primary advantage of privacy protection in federated learning. However, we cast doubt upon the real impact of these gradient attacks on production federated learning systems. By removing several impractical assumptions that the literature has made, we find that gradient attacks pose a limited degree of threat to the privacy of raw data.

Through a comprehensive evaluation of existing gradient attacks in a federated learning system with practical assumptions, we systematically analyze their effectiveness under a wide range of configurations. We present the key priors required to make the attack possible or stronger, such as a narrow distribution of initial model weights and inversion at early stages of training. We then propose Outpost, a new lightweight defense mechanism that provides sufficient and self-adaptive protection against time-varying levels of privacy leakage risk throughout the federated learning process. Our experimental results demonstrate that Outpost can achieve a much better tradeoff than the state-of-the-art with respect to convergence performance, computational overhead, and protection against gradient attacks.
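A minimal sketch of a self-adaptive perturbation defense is shown below; it is not the paper's Outpost algorithm, and the clipping bound and decay schedule are assumptions chosen only for illustration.

```python
import numpy as np

def protect_gradients(gradients, round_idx, base_noise=0.05, decay=0.99, clip_norm=5.0):
    """Illustrative defense: clip the update and add Gaussian noise whose scale
    decays over rounds, since gradient leakage risk is highest early in training."""
    norm = np.linalg.norm(gradients)
    if norm > clip_norm:
        gradients = gradients * (clip_norm / norm)      # bound the update's sensitivity
    noise_scale = base_noise * (decay ** round_idx)     # less noise as leakage risk declines
    return gradients + np.random.normal(0.0, noise_scale, size=gradients.shape)
```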
Speaker Jointly Presented by Fei Wang and Baochun Li (University of Toronto)

Fei Wang is a second-year Ph.D. student at the Edward S. Rogers Sr. Department of Electrical & Computer Engineering, University of Toronto, Canada, under the supervision of Prof. Baochun Li. She received her B.E. degree with honours from Hongyi Honor College, Wuhan University, China. Her research interests lie at the intersection of networking, communication, and machine learning, especially deep reinforcement learning and federated learning. Her personal website is located at silviafeiwang.github.io.

Baochun Li is currently a Professor at the Department of Electrical and Computer Engineering, University of Toronto. He is a Fellow of IEEE.


Truthful Incentive Mechanism for Federated Learning with Crowdsourced Data Labeling

Yuxi Zhao, Xiaowen Gong and Shiwen Mao (Auburn University, USA)

Federated learning (FL) has emerged as a promising paradigm that trains machine learning (ML) models on clients' devices in a distributed manner without the need to transmit clients' data to the FL server. In many ML applications, the labels of training data need to be generated manually by human agents. In this paper, we study FL with crowdsourced data labeling, where the local data of each participating client are labeled manually by the client. We consider the strategic behavior of clients, who may not exert the desired effort in their local data labeling and local model computation, and may misreport their local models to the FL server. We characterize performance bounds on the training loss as a function of clients' data labeling effort, local computation effort, and reported local models. We devise truthful incentive mechanisms that incentivize strategic clients to exert truthful effort and report true local models to the server. The truthful design exploits the non-trivial dependence of the training loss on clients' efforts and local models. Under the truthful mechanisms, we characterize the server's optimal local computation effort assignments. We evaluate the proposed FL algorithms with crowdsourced data labeling and the incentive mechanisms through experiments.
Speaker Yuxi Zhao (Auburn University)

Yuxi Zhao received her Ph.D. from the Department of Electrical and Computer Engineering at Auburn University, USA. Her main research interests include federated learning and data crowdsourcing. She received an IEEE INFOCOM’21 Student Travel Grant. She is a member of the IEEE, IEEE Young Professionals, and the IEEE Communications Society.


SVDFed: Enabling Communication-Efficient Federated Learning via Singular-Value-Decomposition

Haolin Wang, Xuefeng Liu and Jianwei Niu (Beihang University, China); Shaojie Tang (University of Texas at Dallas, USA)

Federated learning (FL) is an emerging paradigm of distributed machine learning. However, when applied to wireless network scenarios, FL usually suffers from high communication cost because clients need to upload their updated gradients to a server in every training round. Although many gradient compression techniques such as sparsification and quantization have been proposed, they compress clients' gradients independently, without considering the correlations among gradients. In this paper, we propose SVDFed, a collaborative gradient compression framework for FL. SVDFed utilizes Singular Value Decomposition (SVD) to find a few basis vectors whose linear combination can well represent clients' gradients in a given round. Due to the correlations among gradients, these basis vectors can still approximate clients' gradients well in subsequent rounds. Therefore, clients only need to transmit the coefficients of the linear combination to the server, which greatly reduces communication cost. In addition, SVDFed leverages classical PID (Proportional, Integral, Derivative) control to determine the proper time to re-calculate the basis vectors so as to maintain their representation ability. Through experiments, we demonstrate that SVDFed outperforms existing gradient compression methods in FL. For example, compared to the popular gradient quantization method QSGD, SVDFed reduces the communication overhead by 66% and pending time by 99%.
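The core compress/decompress step can be sketched as follows, assuming gradients are flat NumPy vectors; the PID-controlled refresh of the basis is omitted, and all function names are illustrative rather than taken from the paper.

```python
import numpy as np

def fit_basis(gradient_matrix, rank):
    """Server side: stack recent client gradients (one per row) and keep the top
    right-singular vectors as a shared basis."""
    _, _, vt = np.linalg.svd(gradient_matrix, full_matrices=False)
    return vt[:rank]                         # shape (rank, d)

def compress(gradient, basis):
    return basis @ gradient                  # client uploads only `rank` coefficients

def decompress(coefficients, basis):
    return basis.T @ coefficients            # server reconstructs an approximate gradient
```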
Speaker Haolin Wang (Beihang University)

Haolin Wang received the B.S. degree in Computer Science and Engineering from Beihang University, Beijing, China, in 2022. He is currently working toward the M.S. degree in Computer Science and Engineering at Beihang University. His research interests include federated learning.


Enabling Communication-Efficient Federated Learning via Distributed Compressed Sensing

Yixuan Guan and Xuefeng Liu (Beihang University, China); Tao Ren (Institute of Software Chinese Academy of Sciences, China); Jianwei Niu (Beihang University, China)

Federated learning (FL) trains a shared global model by periodically aggregating gradients from local devices. Communication overhead becomes a principal bottleneck in FL, since participating devices usually suffer from limited bandwidth and unreliable connections in the uplink transmission. To address this problem, gradient compression based on compressed sensing (CS) has recently been put forward. However, most existing CS-based works compress gradients independently, ignoring the gradient correlations between participants or adjacent rounds, which constrains the achievable compression rates. In view of this observation, we propose a novel gradient compression scheme named Federated Distributed Compressed Sensing (FedDCS), guided by distributed compressed sensing (DCS) theory. FedDCS fully exploits correlated gradients from previous rounds, known as side information, to assist the current gradient reconstruction. Benefiting from this, the reconstruction performance is significantly improved in terms of error and iterations under the same compression rate, and the total number of bits uploaded to reach convergence is considerably reduced. Theoretical analysis and extensive experiments conducted on MNIST and Fashion-MNIST both verify the effectiveness of our approach.
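One way to picture side-information-aided reconstruction is the toy proximal-gradient sketch below, which recovers a gradient from random measurements while shrinking only its difference from the previous round's gradient; this is a generic compressed-sensing illustration under assumed parameters, not the FedDCS algorithm.

```python
import numpy as np

def reconstruct_with_side_info(y, A, side_info, lam=0.01, iters=200):
    """Toy recovery of g from measurements y = A @ g, using the previous round's
    gradient (side_info) as a warm start and sparsifying reference."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-12)      # safe step size for the data-fit term
    x = side_info.copy()
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)                   # gradient step on ||A x - y||^2
        diff = x - side_info
        diff = np.sign(diff) * np.maximum(np.abs(diff) - step * lam, 0.0)  # soft-threshold the innovation
        x = side_info + diff
    return x
```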
Speaker Yixuan Guan (Beihang University)

Yixuan Guan received the B.S. degree from the College of Communication Engineering at Jilin University, Changchun, China, in 2016, and the M.S. degree from the School of Electronic and Information Engineering at South China University of Technology, Guangzhou, China, in 2020. He is currently pursuing the Ph.D. degree at the School of Computer Science and Engineering, Beihang University, Beijing, China. His research interests include federated learning and data compression.


Session Chair

Changqing Luo

